algorithmic discrimination
Implications of the AI Act for Non-Discrimination Law and Algorithmic Fairness
Deck, Luca, Müller, Jan-Laurin, Braun, Conradin, Zipperling, Domenique, Kühl, Niklas
The topic of fairness in AI, as debated in the FATE (Fairness, Accountability, Transparency, and Ethics in AI) communities, has sparked meaningful discussion in recent years. From a legal perspective, however, particularly that of European Union law, many open questions remain. Whereas algorithmic fairness aims to mitigate structural inequalities at the design level, European non-discrimination law is tailored to individual cases of discrimination after an AI model has been deployed. The AI Act may present a significant step towards bridging these two approaches by shifting non-discrimination responsibilities into the design stage of AI models. Based on an integrative reading of the AI Act, we comment on legal as well as technical enforcement problems and propose practical implications for bias detection and bias correction as a way to specify and comply with the Act's technical requirements.
- Europe > Germany > Bavaria > Upper Franconia > Bayreuth (0.06)
- North America > United States > New York (0.05)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.05)
- (6 more...)
- Research Report (0.66)
- Overview (0.47)
- Law (1.00)
- Information Technology > Security & Privacy (0.95)
- Government > Regional Government > Europe Government (0.87)
- Government > Regional Government > North America Government > United States Government (0.47)
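The abstract above mentions design-stage bias detection without naming a metric. As an illustrative sketch (not taken from the paper), one of the simplest design-stage checks is the demographic parity difference: the gap in positive-outcome rates between a protected group and everyone else. The function and example data below are hypothetical.

```python
# Illustrative sketch (not from the paper): the demographic parity difference
# is the gap in positive-prediction rates between members of a protected
# group and non-members. A value near 0 suggests parity on this one metric.

def demographic_parity_difference(predictions, group):
    """predictions: list of 0/1 model outputs.
    group: list of booleans, True if the individual is in the protected group."""
    pos_in = [p for p, g in zip(predictions, group) if g]
    pos_out = [p for p, g in zip(predictions, group) if not g]
    rate_in = sum(pos_in) / len(pos_in)
    rate_out = sum(pos_out) / len(pos_out)
    return rate_in - rate_out

# Hypothetical hiring model: accepts 1 of 4 group members but 3 of 4 others.
gap = demographic_parity_difference(
    predictions=[1, 0, 0, 0, 1, 1, 1, 0],
    group=[True, True, True, True, False, False, False, False],
)
print(gap)  # -0.5
```

Demographic parity is only one of many fairness criteria discussed in the FATE literature; which metric a deployer must satisfy is exactly the kind of question the abstract says the AI Act leaves open.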
Uncovering Algorithmic Discrimination: An Opportunity to Revisit the Comparator
Alvarez, Jose M., Ruggieri, Salvatore
Causal reasoning, and counterfactual reasoning in particular, plays a central role in testing for discrimination. Counterfactual reasoning materializes in what is known as the counterfactual model of discrimination: we compare the discrimination complainant with a comparator, a similar (or similarly situated) profile used for testing the complainant's discrimination claim. In this paper, we revisit the comparator by presenting two kinds of comparators based on the sort of causal intervention we want to represent: the ceteris paribus comparator, which is the standard, and the mutatis mutandis comparator, which is new. We argue for the use of the mutatis mutandis comparator, built on the notion of fairness given the difference, for testing future algorithmic discrimination cases.
- North America > United States (0.28)
- Europe > Italy > Tuscany > Pisa Province > Pisa (0.04)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- (2 more...)
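The distinction the abstract draws can be sketched in a few lines. In this hypothetical illustration (profiles, attribute names, and the `adjust` function are invented, not from the paper), a ceteris paribus comparator flips only the protected attribute, while a mutatis mutandis comparator also propagates the change to attributes assumed to causally depend on it:

```python
# Illustrative sketch with hypothetical profiles (not from the paper).

def ceteris_paribus_comparator(profile, protected_attr, new_value):
    """Copy the complainant's profile, changing only the protected attribute."""
    comparator = dict(profile)
    comparator[protected_attr] = new_value
    return comparator

def mutatis_mutandis_comparator(profile, protected_attr, new_value, adjust):
    """Additionally update causally downstream attributes via `adjust`,
    a user-supplied function encoding the assumed causal model."""
    comparator = ceteris_paribus_comparator(profile, protected_attr, new_value)
    return adjust(comparator)

# Hypothetical example: years_of_experience is assumed to depend on gender
# (e.g., through career interruptions); `adjust` encodes that assumption.
complainant = {"gender": "female", "years_of_experience": 8, "degree": "MSc"}
cp = ceteris_paribus_comparator(complainant, "gender", "male")
mm = mutatis_mutandis_comparator(
    complainant, "gender", "male",
    adjust=lambda p: {**p, "years_of_experience": p["years_of_experience"] + 2},
)
print(cp["years_of_experience"], mm["years_of_experience"])  # 8 10
```

The design choice is that all causal assumptions live in `adjust`: testing a claim with a mutatis mutandis comparator requires making those assumptions explicit, which is the "fairness given the difference" point the abstract argues for.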
Kamala Harris: Admin has duty to stop AI 'algorithmic discrimination,' ensure benefits 'shared equitably'
AI expert Marva Bailer explains how, even though there are currently laws in place, the average person has more access than ever to create deepfakes of celebrities. Vice President Kamala Harris said Monday that it's the Biden administration's "duty" to prevent "algorithmic discrimination" in the field of artificial intelligence (AI), and to ensure its benefits are "shared equitably" among society. Her continuation of what some have called the administration's effort to make AI "woke" came during her remarks alongside President Biden at the White House just before he signed an executive order establishing AI standards for private companies. "I believe we have a moral, ethical and societal duty to make sure that AI is adopted and advanced in a way that protects the public from potential harm and ensure that everyone is able to enjoy its benefits. Since we took office, President Biden and I have worked to uphold that duty," Harris told a crowd gathered in the White House's East Room.
- North America > United States > District of Columbia > Washington (0.08)
- Asia > Middle East > Israel (0.06)
Biden administration pushing to make AI woke, adhere to far-left agenda: watchdog
The president speaks after meeting with AI experts in an effort to manage its risks. The Biden administration is actively seeking to use artificial intelligence to promote a woke, progressive ideology with left-wing activists leading the effort, according to research from a conservative watchdog group. The American Accountability Foundation conducted research into the administration's plans for AI and is now warning in a memo that top U.S. officials under President Biden are seeking to inject "dangerous ideologies" into AI systems. "Under the guise of fighting 'algorithmic discrimination' and 'harmful bias,' the Biden administration is trying to rig AI to follow the woke left's rules," AAF president Tom Jones told Fox News Digital. "Biden is being advised on technology policy, not by scientists, but by racially obsessed social academics and activists. We've already seen the biggest tech firms in the world, like Google under Eric Schmidt, use their power to push the left's agenda. This would take the tech/woke alliance to a whole new, truly terrifying level."
- North America > United States > California (0.06)
- North America > United States > Florida > Miami-Dade County > Miami Beach (0.05)
- North America > United States > District of Columbia > Washington (0.05)
Visual Analysis of Discrimination in Machine Learning
Wang, Qianwen, Xu, Zhenhua, Chen, Zhutian, Wang, Yong, Liu, Shixia, Qu, Huamin
The growing use of automated decision-making in critical applications, such as crime prediction and college admission, has raised questions about fairness in machine learning. How can we decide whether different treatments are reasonable or discriminatory? In this paper, we investigate discrimination in machine learning from a visual analytics perspective and propose an interactive visualization tool, DiscriLens, to support a more comprehensive analysis. To reveal detailed information on algorithmic discrimination, DiscriLens identifies a collection of potentially discriminatory itemsets based on causal modeling and classification rules mining. By combining an extended Euler diagram with a matrix-based visualization, we develop a novel set visualization to facilitate the exploration and interpretation of discriminatory itemsets. A user study shows that users can interpret the visually encoded information in DiscriLens quickly and accurately. Use cases demonstrate that DiscriLens provides informative guidance in understanding and reducing algorithmic discrimination.
- North America > United States > California > San Francisco County > San Francisco (0.14)
- Asia > China > Hong Kong (0.04)
- North America > United States > New York (0.04)
- (16 more...)
- Questionnaire & Opinion Survey (1.00)
- Research Report > New Finding (0.93)
- Research Report > Experimental Study (0.93)
- Banking & Finance (0.93)
- Law > Labor & Employment Law (0.46)
- Law > Civil Rights & Constitutional Law (0.46)
- Education > Educational Setting (0.34)
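DiscriLens's core idea of surfacing "potentially discriminatory itemsets" can be illustrated without the visualization layer. The following is a simplified sketch (not the DiscriLens implementation; the dataset, attribute names, and threshold are hypothetical): it scans attribute-value combinations and flags those whose members' favorable-outcome rate trails the overall rate by more than a threshold.

```python
# Simplified sketch, not the DiscriLens algorithm: flag itemsets whose
# members receive favorable outcomes at a notably lower rate than overall.
from itertools import combinations

def positive_rate(rows, outcomes, itemset):
    """Favorable-outcome rate among rows matching every (key, value) pair."""
    matched = [o for r, o in zip(rows, outcomes)
               if all(r.get(k) == v for k, v in itemset)]
    return (sum(matched) / len(matched)) if matched else None

def discriminatory_itemsets(rows, outcomes, threshold=0.2):
    """Return (itemset, rate) pairs trailing the overall rate by > threshold."""
    overall = sum(outcomes) / len(outcomes)
    keys = sorted({k for r in rows for k in r})
    flagged = []
    for size in (1, 2):  # single attributes and pairs
        for key_combo in combinations(keys, size):
            observed = {tuple((k, r[k]) for k in key_combo) for r in rows
                        if all(k in r for k in key_combo)}
            for itemset in observed:
                rate = positive_rate(rows, outcomes, itemset)
                if rate is not None and overall - rate > threshold:
                    flagged.append((itemset, rate))
    return flagged

# Tiny hypothetical dataset: hiring outcomes by gender and degree.
rows = [{"gender": "f", "degree": "BSc"}, {"gender": "f", "degree": "MSc"},
        {"gender": "m", "degree": "BSc"}, {"gender": "m", "degree": "MSc"}]
outcomes = [0, 0, 1, 1]
print(discriminatory_itemsets(rows, outcomes))
```

The paper pairs this kind of mining with causal modeling precisely because a raw rate gap, as computed here, cannot by itself distinguish reasonable treatment from discriminatory treatment.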
D.C. wants to lead the fight against AI bias
The document describes five principles that should be incorporated into AI systems to ensure their safety and transparency, limit the impact of algorithmic discrimination, and give users control over data.
Equalizing Credit Opportunity in Algorithms: Aligning Algorithmic Fairness Research with U.S. Fair Lending Regulation
Kumar, I. Elizabeth, Hines, Keegan E., Dickerson, John P.
Credit is an essential component of financial wellbeing in America, and unequal access to it is a large factor in the economic disparities between demographic groups that exist today. Today, machine learning algorithms, sometimes trained on alternative data, are increasingly being used to determine access to credit, yet research has shown that machine learning can encode many different versions of "unfairness," thus raising the concern that banks and other financial institutions could -- potentially unwittingly -- engage in illegal discrimination through the use of this technology. In the US, there are laws in place to make sure discrimination does not happen in lending and agencies charged with enforcing them. However, conversations around fair credit models in computer science and in policy are often misaligned: fair machine learning research often lacks legal and practical considerations specific to existing fair lending policy, and regulators have yet to issue new guidance on how, if at all, credit risk models should be utilizing practices and techniques from the research community. This paper aims to better align these sides of the conversation. We describe the current state of credit discrimination regulation in the United States, contextualize results from fair ML research to identify the specific fairness concerns raised by the use of machine learning in lending, and discuss regulatory opportunities to address these concerns.
- North America > United States > New York > New York County > New York City (0.14)
- North America > United States > Iowa (0.04)
- Oceania > Australia > New South Wales > Sydney (0.04)
- (14 more...)
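One concrete point of contact between fair ML research and U.S. fair lending practice, mentioned here only as an illustrative sketch (not from the paper, and not legal advice), is the "four-fifths rule" screen: the protected group's approval rate should be at least 80% of the most favored group's rate. The numbers below are hypothetical.

```python
# Illustrative sketch (not from the paper): the adverse impact ratio compares
# approval rates between a protected group and a reference group; ratios
# below 0.8 are conventionally treated as a signal warranting further review.

def adverse_impact_ratio(approved_protected, total_protected,
                         approved_reference, total_reference):
    rate_protected = approved_protected / total_protected
    rate_reference = approved_reference / total_reference
    return rate_protected / rate_reference

# Hypothetical credit model: 30/100 protected approvals vs 50/100 reference.
ratio = adverse_impact_ratio(30, 100, 50, 100)
print(round(ratio, 2))  # 0.6 -> below the 0.8 screen, warrants review
```

As the abstract notes, such simple screens are only loosely aligned with what fair ML research measures, which is part of the misalignment the paper seeks to address.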
FTC Mulls New Artificial Intelligence Regulation
The Federal Trade Commission (FTC) is considering a wide range of options, including new rules and guidelines, to tackle data privacy concerns and algorithmic discrimination. FTC's Chair Lina Khan, in a letter to Senator Richard Blumenthal (D-CT), outlined her goals to "protect Americans from unfair or deceptive practices online" and in particular, Khan said that the FTC is considering rulemaking to address "lax security practices, data privacy abuses and algorithmic decision-making that may result in unlawful discrimination." The FTC's letter comes in response to a letter from several lawmakers, including Senator Blumenthal, who urged the FTC to start a rulemaking process that would "protect consumer privacy, promote civil rights and set clear safeguards on the collection and use of data in the digital economy." "Rulemaking may prove a useful tool to address the breadth of challenges that can result from commercial surveillance and other data practices […] and could establish clear market-wide requirements," Khan wrote. The FTC can resort to its rulemaking authority to address unfair or deceptive practices that occur commonly, instead of relying on actions against individual companies.
- Law > Business Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
Addressing Algorithmic Discrimination
It should no longer be a surprise that algorithms can discriminate. A criminal risk-assessment algorithm is far more likely to erroneously predict a Black defendant will commit a crime in the future than a white defendant.[2] Ad-targeting algorithms promote job opportunities to race- and gender-skewed audiences, showing secretary and supermarket job ads to far more women than men.[1] A hospital's resource-allocation algorithm favored white over Black patients with the same level of medical need.[5] Algorithmic discrimination is particularly troubling when it affects consequential social decisions, such as who gets released from jail, or has access to a loan or health care. Employment is a prime example. Employers are increasingly relying on algorithmic tools to recruit, screen, and select job applicants by making predictions about which candidates will be good employees.
- North America > United States > New York > Erie County > Buffalo (0.05)
- North America > United States > Missouri > St. Louis County > St. Louis (0.05)
- North America > United States > California (0.05)
- Law (1.00)
- Health & Medicine (1.00)
- Education > Assessment & Standards (0.49)
- Education > Educational Setting (0.47)
EU: Artificial Intelligence Regulation Threatens Social Safety Net, Warns HRW
The European Union's plan to regulate artificial intelligence is ill-equipped to protect people from flawed algorithms that deprive them of lifesaving benefits and discriminate against vulnerable populations, Human Rights Watch said in a report on the regulation. The European Parliament should amend the regulation to better protect people's rights to social security and an adequate standard of living. The 28-page report, a question-and-answer document titled "How the EU's Flawed Artificial Intelligence Regulation Endangers the Social Safety Net," examines how governments are turning to algorithms to allocate social security support and prevent benefits fraud. Drawing on case studies in Ireland, France, the Netherlands, Austria, Poland, and the United Kingdom, Human Rights Watch found that this trend toward automation can discriminate against people who need social security support, compromise their privacy, and make it harder for them to qualify for government assistance. But the regulation will do little to prevent or rectify these harms.
- Europe > Austria (0.27)
- Europe > Netherlands (0.26)
- Europe > United Kingdom (0.25)
- (2 more...)
- Law > Civil Rights & Constitutional Law (1.00)
- Government (1.00)